Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization
However, single-step adversarial training (SSAT) suffers from catastrophic overfitting (CO), a phenomenon that leads to a severely distorted classifier, making it vulnerable to multi-step adversarial attacks. In this work, we observe that some adversarial examples generated on the SSAT-trained network exhibit anomalous behaviour: although these training samples are generated by the inner maximization process, their associated loss decreases instead. We name these abnormal adversarial examples (AAEs).
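The defining property of an AAE is that the inner maximization, which is supposed to *increase* the loss, instead produces a sample with *lower* loss than its clean counterpart. A minimal sketch of that detection criterion (the function name and the simple loss comparison are illustrative assumptions, not the paper's exact regularization):

```python
import numpy as np

def identify_aaes(clean_losses, adv_losses):
    """Flag abnormal adversarial examples (AAEs): samples whose loss
    *decreases* after the inner maximization step, contrary to its goal
    of maximizing the loss.

    clean_losses: per-sample loss on the original inputs.
    adv_losses:   per-sample loss on the generated adversarial inputs.
    Returns a boolean mask; True marks an AAE.
    """
    clean_losses = np.asarray(clean_losses, dtype=float)
    adv_losses = np.asarray(adv_losses, dtype=float)
    return adv_losses < clean_losses

# Example: the second sample's loss drops from 1.2 to 0.8 -> abnormal.
mask = identify_aaes([0.9, 1.2, 0.3], [1.5, 0.8, 0.7])
# mask -> [False, True, False]
```

In practice such a mask would be computed inside the training loop from the per-sample cross-entropy before and after the attack step, and the flagged samples handled by the paper's regularization.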
Supplementary Material: Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View. Zhiyu Lin
Fan et al. [2021] incorporates high-frequency views into contrastive learning. However, several works challenge the validity of this assumption. Yin et al. [2019] proposes a robustness analysis strategy based on Fourier heatmaps, which utilizes a model's sensitivity to frequency bases. Maiya et al. [2021] argues that model robustness does not have an intrinsic connection to frequency components. Beyond the frequency-component perspective, Chen et al. [2021] has shown that CNN models should be consistent with the Human Visual System.

To show the power-law distribution of natural images, we select CIFAR-10 [Krizhevsky et al., 2009], Tiny-ImageNet [Le and Yang, 2015], and ImageNet [Deng et al., 2009] for our experiments. Fig. 2 shows an example of the division on ImageNet, in which the high- and low-frequency components of the image obtained according to the division radius are also in line with our analysis. We conduct experiments on naturally trained models, using the test sets of the CIFAR-10, Tiny-ImageNet, and ImageNet-1k datasets.
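The division by radius described above can be sketched as follows: take a centered 2D FFT of the image, keep coefficients inside a circular mask of the given radius as the low-frequency component, and the rest as the high-frequency component. This is a generic sketch of the idea; the paper's exact mask shape and radius schedule may differ.

```python
import numpy as np

def split_frequencies(img, radius):
    """Split a grayscale image into low- and high-frequency components
    with a centered FFT and a circular mask of the given division radius.

    Returns (low, high); by construction low + high reconstructs img
    up to floating-point error, since the two masks partition the spectrum.
    """
    f = np.fft.fftshift(np.fft.fft2(img))        # center DC component
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius                    # inside radius = low freq
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

# Example on a random 32x32 "image" (CIFAR-10-sized):
img = np.random.rand(32, 32)
low, high = split_frequencies(img, radius=8)
# low + high reconstructs the original image up to numerical error
```

Applying this to dataset images at a sweep of radii yields the per-radius energy split used to examine the power-law (long-tailed) frequency distribution.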